6 research outputs found

    Essex-NLIP at MediaEval Predicting Media Memorability 2020 Task

    In this paper, we present the methods of approach and the main results from the Essex NLIP Team's participation in the MediaEval 2020 Predicting Media Memorability task. The task requires participants to build systems that predict short-term and long-term memorability scores for the real-world video samples provided. Our approach focuses on colour-based visual features as well as the video annotation meta-data; in addition, hyper-parameter tuning was explored. Despite the simplicity of the methodology, our approach achieves competitive results. We investigated the use of different visual features and assessed memorability predictions through various regression models, with Random Forest regression as our final model for predicting the memorability of videos.
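    The regression setup described above can be sketched as follows. This is a hedged, minimal illustration using scikit-learn: the feature matrix and memorability scores are synthetic stand-ins, not the MediaEval data, and the hyper-parameter values shown are merely examples of the kind that would be tuned.

    ```python
    # Sketch: Random Forest regression over colour-based video features
    # (all data here is synthetic; feature semantics are illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy stand-in for colour-based visual features (e.g. mean HSV per video).
    X = rng.random((200, 6))
    # Toy stand-in for short-term memorability scores in [0, 1].
    y = 0.5 + 0.3 * X[:, 0] + 0.1 * rng.random(200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # n_estimators / max_depth are the sort of hyper-parameters one would tune.
    model = RandomForestRegressor(n_estimators=100, max_depth=5, random_state=0)
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(preds.shape)  # one predicted memorability score per held-out video
    ```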

    NLIP-Essex-ITESM at ImageCLEFcaption 2021 task: Deep learning-based information retrieval and multi-label classification towards improving medical image understanding

    This work presents the NLIP-Essex-ITESM team's participation in the concept detection sub-task of the ImageCLEFcaption 2021 task. We developed a method to predict health outcomes from medical images by processing concepts from radiology reports together with their associated medical images. Our aim is to improve medical image understanding and provide sophisticated tools to automate the thorough analysis of multi-modal medical images. In this paper, two deep learning- and k-NN-based methods, (a) Information Retrieval and (b) Multi-label Classification, were developed and assessed. In addition, a DenseNet-121 and an EfficientNet were used to train on and extract imaging features. Our team achieved the second-highest score with the Information Retrieval method (a benchmark F1-score of 0.469). Further investigations are underway towards improving health outcome predictions from multi-modal medical images. Code and pre-trained models are available at https://github.com/fjpa121197/ImageCLEF2021.
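    The k-NN-based retrieval idea can be sketched as: embed a query image (e.g. with a CNN backbone), find its most similar training image by cosine similarity, and reuse that image's concept labels. The embeddings and UMLS concept IDs below are toy stand-ins, not the team's actual features or data.

    ```python
    # Hedged sketch of k-NN concept retrieval over image embeddings.
    import numpy as np

    def nearest_neighbour_concepts(query_emb, train_embs, train_concepts):
        """Return the concept set of the most similar training embedding."""
        # Normalise so the dot product equals cosine similarity.
        q = query_emb / np.linalg.norm(query_emb)
        t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
        sims = t @ q
        return train_concepts[int(np.argmax(sims))]

    # Toy 2-D "embeddings" and made-up UMLS concept sets.
    train_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
    train_concepts = [{"C0040405"}, {"C1306645"}, {"C0040405", "C1306645"}]

    print(nearest_neighbour_concepts(np.array([0.9, 0.1]), train_embs, train_concepts))
    # → {'C0040405'}
    ```

    In practice the embedding would come from a trained backbone (the abstract mentions DenseNet-121 and EfficientNet), and one might vote over the top-k neighbours rather than copy the single nearest one.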

    The 2021 ImageCLEF Benchmark: Multimedia Retrieval in Medical, Nature, Internet and Social Media Applications

    This paper presents the ideas for the 2021 ImageCLEF lab that will be organized as part of the Conference and Labs of the Evaluation Forum — CLEF Labs 2021 in Bucharest, Romania. ImageCLEF is an ongoing evaluation initiative (active since 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval of visual data with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2021, the 19th edition of ImageCLEF will organize four main tasks: (i) a Medical task addressing visual question answering, concept annotation and tuberculosis classification, (ii) a Coral task addressing the annotation and localisation of substrates in coral reef images, (iii) a DrawnUI task addressing the creation of websites from either a drawing or a screenshot by detecting the different elements present in the design, and (iv) a new Aware task addressing the prediction of real-life consequences of online photo sharing. The strong participation in 2020, despite the COVID pandemic, with over 115 research groups registering and 40 submitting over 295 runs for the tasks, shows a strong interest in this benchmarking campaign. We expect the new tasks to attract at least as many researchers in 2021.

    Overview of the ImageCLEF 2021: Multimedia Retrieval in Medical, Nature, Internet and Social Media Applications

    This paper presents an overview of the ImageCLEF 2021 lab that was organized as part of the Conference and Labs of the Evaluation Forum – CLEF Labs 2021. ImageCLEF is an ongoing evaluation initiative (first run in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval of visual data with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2021, the 19th edition of ImageCLEF ran four main tasks: (i) a medical task that groups three previous tasks, i.e., caption analysis, tuberculosis prediction, and medical visual question answering and question generation, (ii) a nature coral task about segmenting and labeling collections of coral reef images, (iii) an Internet task addressing the problems of identifying hand-drawn and digital user interface components, and (iv) a new social media aware task on estimating potential real-life effects of online image sharing. Despite the current pandemic situation, the benchmark campaign received strong participation, with over 38 groups submitting more than 250 runs.

    Overview of the ImageCLEFmed 2021 concept & caption prediction task

    The 2021 ImageCLEF concept detection and caption prediction task follows similar challenges that were already run from 2017–2020. The objective is to extract UMLS-concept annotations and/or captions from the image data, which are then compared against the original text captions of the images. The images used are clinically relevant radiology images, and the describing captions were created by medical experts. In the caption prediction task, lexical similarity with the original image captions is evaluated with the BLEU score. In the concept detection task, UMLS (Unified Medical Language System) terms are extracted from the original text captions and compared against the predicted concepts in a multi-label way; the F1-score was used to assess the performance. The 2021 task was conducted in collaboration with the Visual Question Answering task and used the same images. The task attracted strong participation with 25 registered teams; in the end, 10 teams submitted 75 runs for the two sub-tasks. Results show that a variety of techniques can lead to good prediction results for the two tasks. In comparison to earlier competitions, more modern deep learning architectures such as EfficientNets and Transformer-based architectures for text or images were used.
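    The multi-label F1 evaluation for concept detection can be sketched as a per-image F1 between the predicted and ground-truth concept sets, averaged over images. This is an illustrative reimplementation of the idea; the exact CLEF scoring script may differ in details, and the concept IDs below are made up for the example.

    ```python
    # Sketch: sample-wise multi-label F1 over UMLS concept sets.
    def f1(pred, gold):
        """F1 between one image's predicted and gold concept sets."""
        if not pred and not gold:
            return 1.0  # both empty: perfect agreement
        tp = len(pred & gold)
        if tp == 0:
            return 0.0
        precision = tp / len(pred)
        recall = tp / len(gold)
        return 2 * precision * recall / (precision + recall)

    # Toy ground-truth and predicted concept sets for two images.
    gold = [{"C0040405", "C0817096"}, {"C1306645"}]
    pred = [{"C0040405"}, {"C1306645", "C0040405"}]

    score = sum(f1(p, g) for p, g in zip(pred, gold)) / len(gold)
    print(round(score, 3))  # → 0.667
    ```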

    Ensemble Deep Learning Architectures for Automated Diagnosis of Pulmonary Tuberculosis using Chest X-ray

    Tuberculosis (TB) is still a serious public health concern across the world, causing 1.4 million deaths each year. However, there has been a scarcity of radiological interpretation skills in many TB-infected locations, which may lead to poor diagnosis rates and poor patient outcomes. A cost-effective and efficient automated technique could support screening evaluations in underprivileged countries and provide early illness diagnosis. In this work, we propose a deep ensemble learning framework that integrates multi-source data from two deep learning-based techniques for the automated diagnosis of TB. The integrated model framework was tested on two publicly available datasets and one private dataset. While both proposed deep learning-based automated detection systems showed high accuracy and specificity compared to the state of the art, the ensemble method significantly improved prediction accuracy in detecting chest radiographs with active pulmonary TB from a multi-ethnic patient cohort. Extensive experiments were used to validate the methodology, and the results were superior to previous approaches, showing the method's practicality for real-world application. By integrating supervised prediction and unsupervised representation, the ensemble method accurately classified TB with an area under the receiver operating characteristic curve (AUROC) of up to 0.98 using chest radiography, outperforming the other tested classifiers and achieving state-of-the-art performance. The methodology and findings provide a viable route to more accurate and quicker TB detection, especially in low- and middle-income nations.
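    A common way to realise the ensembling described above is to average the per-image TB probabilities from the constituent models and score the result with AUROC. The sketch below uses made-up probabilities and a simple mean; it does not reproduce the paper's actual models or fusion rule.

    ```python
    # Hedged sketch: probability-averaging ensemble scored with AUROC
    # (labels and probabilities are synthetic, for illustration only).
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([0, 0, 1, 1, 1, 0])  # 1 = active pulmonary TB

    probs_model_a = np.array([0.2, 0.7, 0.8, 0.6, 0.9, 0.3])  # e.g. supervised model
    probs_model_b = np.array([0.1, 0.5, 0.7, 0.9, 0.8, 0.2])  # e.g. representation-based model

    # Average the two models' probabilities per image.
    ensemble = (probs_model_a + probs_model_b) / 2
    print(round(roc_auc_score(y_true, ensemble), 3))
    ```

    On this toy data the averaged scores separate the classes better than model A alone, mirroring the qualitative claim that the ensemble improves on its individual members.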